Systems Performance Evaluation Cooperative (SPEC)
by Larry Gray, Workstation Performance Evaluation
Management Summary
The Systems Performance Evaluation Cooperative (SPEC) is not unique in
the world of computer system benchmarking and performance evaluation.
SPEC's primary goal is to create and distribute application-like
benchmarks that fairly measure the performance of computer systems.
There are other organizations (for-profit and non-profit) with similar
aspirations, serving the needs of computer system users and vendors by
supplying the means for independent comparison of various machines.
SPEC does not intend to duplicate the efforts of other groups dedicated
to benchmark standards, nor will it be a competitor. Rather, SPEC seeks
to cooperate in any manner possible to further the goals of such groups
and to promote consistency and control in the computer system
measurement arena. At present SPEC is aware of four active groups with
similar goals whose work is focused on specific types of computing: the
Transaction Processing Performance Council (TPC), the National Computer
Graphics Association (NCGA), AFFU, and The Perfect Club. (See the
Benchmark Glossary article in this issue for details about these
organizations.)
SPEC currently has 23 members, virtually all the major names in the
business: Arix, AT&T, Bull S.A., Compaq, Control Data, Data General,
Digital Equipment (DEC), DuPont, Fujitsu, HP/Apollo, IBM,
Intel, Intergraph, MIPS, Motorola, NCR, Prime, Siemens, Silicon
Graphics, Solbourne, Stardent, SUN, and Unisys.
Release 1.0 Benchmarks
SPEC announced the availability of its first suite of benchmarks, known
as Release 1.0, on October 2, 1989. In addition to the set of proposed
standard benchmarks, SPEC has published results for many vendor systems
in its quarterly newsletter. The trade press and industry consultants
have since produced several favorable articles on the benchmarks.
The Release 1.0 suite of benchmarks was chosen from over 50 submissions
and contains 10 CPU-intensive, application-based benchmarks that met
the SPEC criteria of portability, public access, and significant system
loading or reasonably long run time. The benchmarks run only on UN*X or
DEC VMS systems. Portability across platforms is a key requirement
that cannot be met with proprietary operating systems such as MPE.
You will doubtless find the results interesting since, in the
workstation market, many of our top competitors are represented. Some
charts will be provided in future issues of pn2 for vendor comparisons.
SPEC recommends that users examine individual benchmark results in
detail rather than rely only on the SPECmark. See the SPEC Benchmark
Release 1.0 Summary at the end of this article for data on the HP 9000
Model 834.
The SPECmark
The SPECmark is a single number similar in spirit to VAXMIPS (integer
MIPS), but this first suite is predominantly CPU-intensive and heavily
weighted toward floating point. It was the SPEC members' intent to
offer a set of benchmarks that encompassed CPU, I/O, memory, graphics,
and other system components, but such code was simply not available,
portable, or feasible to use. Moreover, SPEC members are not advocates
of the "one number theory" of system performance rating. The rationale
for publishing a single number anyway is that if SPEC did not create
one, the press would find its own ways of summarizing results, thereby
diluting the value of otherwise tightly controlled metrics.
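To make the single number concrete: as commonly published for Release
1.0, each benchmark's SPECratio is its VAX 11/780 reference time
divided by the time measured on the system under test, and the SPECmark
is the geometric mean of the ten SPECratios. The C sketch below
illustrates only that arithmetic; the benchmark names are those
commonly listed for Release 1.0, and the times in it are invented
placeholders, not published results.

    #include <stdio.h>
    #include <math.h>

    #define NBENCH 10

    int main(void)
    {
        static const char *name[NBENCH] = {
            "gcc", "espresso", "spice2g6", "doduc", "nasa7",
            "li", "eqntott", "matrix300", "fpppp", "tomcatv"
        };
        /* Hypothetical times in seconds -- placeholders, not
         * published figures. */
        static const double vax_ref[NBENCH] = {   /* VAX 11/780 */
            1400.0, 2200.0, 24000.0, 1900.0, 20000.0,
            6200.0, 1100.0, 4500.0, 3000.0, 2700.0
        };
        static const double measured[NBENCH] = {  /* system under test */
            120.0, 180.0, 2000.0, 160.0, 1700.0,
            520.0, 95.0, 380.0, 250.0, 230.0
        };
        double log_sum = 0.0;
        int i;

        for (i = 0; i < NBENCH; i++) {
            double ratio = vax_ref[i] / measured[i];    /* SPECratio */
            printf("%-10s SPECratio %6.1f\n", name[i], ratio);
            log_sum += log(ratio);
        }
        /* SPECmark: geometric mean of the ten SPECratios. */
        printf("%-10s SPECmark  %6.1f\n", "overall",
               exp(log_sum / NBENCH));
        return 0;
    }

The geometric mean is the natural composite for ratios: the comparison
between two machines then does not depend on the choice of reference
machine, and no single benchmark can dominate the average the way it
could in an arithmetic mean.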
The Future
SPEC members have been working diligently to add to the Release 1.0
suite, and a Release 2.0 is slated for the first quarter of 1991.
Release 2.0 will include several "system" benchmarks that perform
significant disk I/O and multi-tasking. Release 2.0 will also bring
additional metrics: the SPECmark will be redefined to include I/O
performance, while I/O, floating point, and CPU performance will also
be reported separately.
All new benchmarks will be made as compliant with POSIX standards as
possible. This may allow execution on some POSIX-compliant proprietary
operating systems.
SPEC Release 3.0 may well contain multi-user benchmarks where
throughput is the primary measure. SPEC's goal is to improve the suite
continually by adding better application-based benchmarks and
eliminating weaker ones.
Conclusions
The SPEC benchmarks seem to have been well accepted by the user
community. We see requirements for SPEC results showing up in more and
more RFPs (Requests for Proposal). The US Navy and Air Force have
adopted the SPECmark as their metric for processor performance, as have
several Fortune 500 companies.
SPEC meetings are open and very well attended. Most vendors have two to
four people at each meeting, and many attend from non-member companies.
HP usually sends three and could use more. Anyone who could
participate in SPEC on a long-term basis is encouraged to join us. The
more engineering effort applied, the sooner you will see more and
better benchmarks in the marketplace.
If you are interested in keeping up with SPEC results, you are
encouraged to subscribe to the SPEC newsletter. Four issues cost $150
and can be obtained by writing or calling:
Waterside Associates (SPEC administrators)
39150 Paseo Padre Parkway
Fremont, CA 94538
(415) 792-2901